System Failure

Description:

How does a (social) system know it is failing?


How can a (social) system–which is a system based on response–be aware that it is failing?

How would a system, even if self-aware—like a dog or cat—know that it was failing? It would need some reference system, some “absolute”, against which to judge variance over time. Predicting, or projecting, whether change over time trends in a positive or negative direction, while unsure of the lay of the land one is travelling through, is an almost impossible task.

One would need a fixed point–call it the anchor–within some closed system. If the system itself is variant, then the point, though fixed relative to the system, would change according to the inner workings of the system itself. So the prerequisites are: a well-defined, bounded system; a fixed point within that system; and the ability (mathematical and/or other) to “capture” the movement of this point.
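The prerequisites above can be sketched in code. This is a minimal illustration, not a claim about any real system: the anchor value, the numeric notion of “state”, and the function name are all hypothetical, chosen only to show why the fixed point is indispensable.

```python
# Sketch: a bounded system can only judge whether it is failing by
# comparing its state to a fixed anchor over time. Without the anchor,
# the same sequence of states is uninterpretable.

def drift_direction(states, anchor):
    """Compare the system's distance from the anchor at the start and
    end of the observation window and report the trend."""
    if len(states) < 2:
        return "stable"  # not enough history to judge variance
    early = abs(states[0] - anchor)
    late = abs(states[-1] - anchor)
    if late < early:
        return "improving"  # converging toward the reference point
    if late > early:
        return "failing"    # diverging from the reference point
    return "stable"

# The identical sequence 3, 4, 5 is progress or decline depending
# entirely on where the fixed point lies:
print(drift_direction([3, 4, 5], anchor=10))  # improving
print(drift_direction([3, 4, 5], anchor=0))   # failing
```

The point of the sketch is the dependency, not the arithmetic: remove `anchor` and the function cannot be written at all.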

I suggest we need a guidance system of some nature: one with knowledge of the greater environment (the cosmos), of the personal place (the point), and of the system within which the point is fixed–one could refer to it as the community. Besides all that, what could serve as this guidance system? Science, with all its (material) benefits, is no guidance system for our journey through the cosmos. It will provide us with the tools and the means to build the ship, and even include a guidance system—but it has no idea where to navigate to. It is something like the radio that science and its progeny, technology, have provided us: it can receive (and possibly transmit) along many channels, but it cannot decide which channel you should use, nor the message that will be carried along that channel.

Would not that be the case with AI? Is not self-awareness a key characteristic of sentient beings? And is not one characteristic of this self-awareness, particular to human beings, the ability to be aware of a higher order—in other words, an awareness of the possibility of evolution? This is what enables a human being to recognize a failing system—whether internal or external—and take appropriate measures: to adapt, adjusting as necessary, so as to prevent damage or to repair, i.e., heal, the system so that it might continue to function. Would AI be able to do that?

In the computer field, this is referred to as “error correction”.
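A minimal concrete instance of error correction, far simpler than the codes used in practice, is the triple-repetition code: each bit is transmitted three times, and the receiver repairs a single flipped copy by majority vote. The redundancy plays the role of the fixed reference against which damage is detected and healed.

```python
# Triple-repetition error correction: redundancy lets the receiver
# both notice and repair a corrupted bit.

def encode(bits):
    """Transmit each bit three times."""
    return [b for b in bits for _ in range(3)]

def decode(received):
    """Recover each original bit by majority vote over its three copies."""
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]

message = [1, 0, 1]
sent = encode(message)          # [1, 1, 1, 0, 0, 0, 1, 1, 1]
sent[4] = 1                     # a single bit is flipped in transit
assert decode(sent) == message  # the damage is detected and repaired
```

The system “knows” it is failing only because the code fixes, in advance, what an undamaged message must look like.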